This work considers optimization of compositions of functions in a nested form, where each function contains an expectation. Problems of this type are increasingly common in applications such as policy evaluation in reinforcement learning and model customization in meta-learning. Standard Riemannian stochastic gradient methods for non-compositional optimization cannot be applied directly, because the stochastic approximation of the inner function introduces bias into the gradient of the outer function. For two-level compositional optimization, we propose a Riemannian Stochastic Compositional Gradient Descent (R-SCGD) method, which finds an approximate stationary point with the expected squared Riemannian gradient smaller than $\epsilon$ in $O(\epsilon^{-2})$ calls to the stochastic gradient oracle of the outer function and the stochastic function and gradient oracles of the inner function. Furthermore, we generalize the R-SCGD algorithm to problems with multi-level nested compositional structures, with the same complexity of $O(\epsilon^{-2})$ for the first-order stochastic oracle. Finally, the performance of the R-SCGD method is numerically evaluated on a policy evaluation problem in reinforcement learning.
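For concreteness, a minimal sketch of the problem class and of a generic stochastic compositional iteration transplanted to a manifold; the symbols $\mathcal{M}$, the retraction $R_x$, the tangent-space projection $\Pi_{T_x\mathcal{M}}$, the step sizes $\alpha_k, \beta_k$, and the tracking sequence $y_k$ are illustrative notation, not taken from the paper itself.

```latex
% Two-level compositional problem on a Riemannian manifold M (illustrative notation):
\min_{x \in \mathcal{M}} \; f\bigl(g(x)\bigr), \qquad
g(x) = \mathbb{E}_{\xi}\bigl[G(x;\xi)\bigr], \qquad
f(y) = \mathbb{E}_{\eta}\bigl[F(y;\eta)\bigr]

% Generic SCGD-style update (sketch only): keep a running estimate y_k of the
% inner value g(x_k), chain the stochastic Jacobian of G with the gradient of F,
% project onto the tangent space, and retract back to the manifold.
y_{k+1} = (1-\beta_k)\, y_k + \beta_k\, G(x_k;\xi_k), \qquad
x_{k+1} = R_{x_k}\!\Bigl(-\alpha_k\, \Pi_{T_{x_k}\mathcal{M}}
          \bigl[\nabla G(x_k;\xi_k)^{\top}\, \nabla F(y_{k+1};\eta_k)\bigr]\Bigr)
```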
Multiple sclerosis (MS) is a chronic neuro-inflammatory disease, and multi-modal MRIs are routinely used to monitor MS lesions. Many automatic MS lesion segmentation models have been developed and have reached human-level performance. However, most established methods assume that the MRI modalities used during training are also available during testing, which is not guaranteed in clinical practice. Previously, a training strategy termed modality dropout (ModDrop) has been applied to MS lesion segmentation to achieve state-of-the-art performance with missing modalities. In this paper, we propose a novel method called ModDrop++ to train a unified network adaptive to an arbitrary number of input MRI sequences. ModDrop++ upgrades the main idea of ModDrop in two key ways. First, we design a plug-in dynamic head and adopt a filter-scaling strategy to improve the expressiveness of the network. Second, we design a co-training strategy to leverage the intra-subject relation between full-modality and missing-modality data. Specifically, the intra-subject co-training strategy aims to guide the dynamic head to generate similar feature representations for the full-modality and missing-modality data of the same subject. We use two public MS datasets to show the superiority of ModDrop++. The source code and trained models are available at https://github.com/han-liu/moddropplusplus.
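As a rough illustration of the filter-scaling idea described above, the sketch below conditions a 3D convolution block on a binary modality-availability code via a small MLP, and adds an intra-subject consistency loss between full-modality and missing-modality features. The module name, layer sizes, sigmoid scaling, and MSE consistency loss are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class ModalityScaledConv(nn.Module):
    """Conv block whose output channels are re-scaled by a code that encodes
    which MRI sequences are present (1) or missing (0) for this subject."""

    def __init__(self, in_ch: int, out_ch: int, n_modalities: int):
        super().__init__()
        self.conv = nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1)
        # "dynamic head": a tiny MLP that turns the availability code into a
        # per-channel scaling vector (filter scaling).
        self.dynamic_head = nn.Sequential(
            nn.Linear(n_modalities, out_ch),
            nn.ReLU(inplace=True),
            nn.Linear(out_ch, out_ch),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor, modality_code: torch.Tensor) -> torch.Tensor:
        # x: (B, in_ch, D, H, W); modality_code: (B, n_modalities) in {0, 1}
        feats = self.conv(x)
        scale = self.dynamic_head(modality_code)        # (B, out_ch)
        return feats * scale[:, :, None, None, None]    # broadcast over D, H, W


def co_training_loss(feat_full: torch.Tensor, feat_missing: torch.Tensor) -> torch.Tensor:
    """Intra-subject co-training (sketch): pull the missing-modality features of a
    subject toward the full-modality features of the same subject."""
    return nn.functional.mse_loss(feat_missing, feat_full.detach())
```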
Magnetic resonance images (MRIs) are widely used to quantify the vestibular schwannoma (VS) and the cochlea. Recently, deep learning methods have shown state-of-the-art performance for segmenting these structures. However, training segmentation models may require manual labels in the target domain, which are expensive and time-consuming to obtain. To overcome this problem, domain adaptation is an effective way to leverage information from the source domain to obtain accurate segmentations without requiring manual labels in the target domain. In this paper, we propose an unsupervised learning framework to segment the VS and the cochlea. Our framework leverages the information from contrast-enhanced T1-weighted (ceT1-w) MRIs and their labels, and produces segmentations for T2-weighted MRIs without any labels in the target domain. We first apply a generator to achieve image-to-image translation. Next, we ensemble the outputs of different models to obtain the final segmentations. To cope with MRIs from different sites/scanners, we apply various 'online' augmentations during training to better capture the geometric variability as well as the variability in image appearance and quality. Our method is straightforward to build and produces promising segmentations, with a mean Dice score of 0.7930 for the VS and 0.7432 for the cochlea on the validation set.
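A minimal sketch of the ensembling step mentioned above, assuming each trained model maps a T2-weighted volume to per-class logits; the averaging-then-argmax rule and the function name are illustrative choices rather than the paper's exact recipe.

```python
import torch

@torch.no_grad()
def ensemble_segment(models, t2_volume: torch.Tensor) -> torch.Tensor:
    """Average the softmax probability maps of several independently trained
    models and take the per-voxel argmax as the final segmentation."""
    probs = torch.stack(
        [torch.softmax(m(t2_volume), dim=1) for m in models]
    ).mean(dim=0)                 # (B, n_classes, D, H, W)
    return probs.argmax(dim=1)    # (B, D, H, W) label map
```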
Price movement forecasting aims to predict the future trends of financial assets based on the current market conditions and other relevant information. Recently, machine learning (ML) methods have become increasingly popular and have achieved promising forecasting results in both academia and industry. Most existing ML solutions formulate the forecasting problem as a classification (to predict the direction) or a regression (to predict the return) problem over the entire training dataset. However, due to the extremely low signal-to-noise ratio and the stochastic nature of financial data, good trading opportunities are extremely scarce. As a result, without careful selection of potentially profitable samples, such ML methods are prone to capturing patterns of noise instead of real signals. To address this problem, we propose a novel price movement forecasting framework, called Locality-Aware Attention and Iterative Refinement Labeling (LARA), which consists of two main components: 1) locality-aware attention automatically extracts potentially profitable samples by attending to surrounding class-aware label information; moreover, equipped with metric learning techniques, locality-aware attention enjoys a task-specific distance metric and distributes attention over potentially profitable samples in a more effective way; 2) iterative refinement labeling further refines the labels of noisy samples iteratively and then combines the learned predictors to be robust to unseen and noisy samples. In extensive experiments on three real-world financial markets (ETFs, stocks, and cryptocurrencies), LARA achieves superior performance compared with traditional time-series analysis methods and a set of machine-learning-based competitors on the Qlib platform. Extensive ablation studies and experiments also demonstrate that LARA indeed captures more reliable trading opportunities.
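As a hedged illustration of the iterative refinement labeling component, the sketch below repeatedly fits a predictor and relabels only the samples on which the current predictor is highly confident; the threshold, the number of rounds, and the binary-label setting are assumptions, and LARA's actual refinement and combination rules may differ.

```python
import numpy as np

def iterative_refinement_labeling(X, y, train_fn, n_rounds: int = 3, thresh: float = 0.9):
    """Repeatedly fit a predictor, then flip the labels of samples on which the
    predictor strongly disagrees with the current (possibly noisy) label."""
    labels = y.copy()
    model = None
    for _ in range(n_rounds):
        model = train_fn(X, labels)              # any classifier exposing predict_proba
        proba = model.predict_proba(X)[:, 1]     # estimated P(label == 1)
        confident_pos = proba >= thresh
        confident_neg = proba <= 1.0 - thresh
        labels = np.where(confident_pos, 1, np.where(confident_neg, 0, labels))
    return model, labels
```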
Recently, deep learning methods have achieved state-of-the-art performance in many medical image segmentation tasks. Many of them are based on convolutional neural networks (CNNs). For such methods, the encoder is the key part that extracts global and local information from the input image; the extracted features are then passed to the decoder to predict the segmentation. In contrast, several recent works have shown superior performance using Transformers, which can better model long-range spatial dependencies and capture low-level details. However, Transformers as the sole encoder underperform on some tasks where they cannot effectively replace convolution-based encoders. In this paper, we propose a model with a dual encoder for 3D biomedical image segmentation. Our model is a U-shaped CNN augmented with an independent Transformer encoder. We fuse the information from the convolutional encoder and the Transformer encoder and pass it to the decoder to obtain the results. We evaluate our method on three public datasets from three different challenges: BTCV, MODA, and Decathlon. Compared with state-of-the-art models with and without Transformers on each task, our proposed method obtains higher Dice scores across the board.
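A minimal PyTorch sketch of one way to realize the dual-encoder fusion described above, assuming both encoders return feature maps at the same spatial resolution; concatenation followed by a 1x1x1 convolution is a common fusion choice and not necessarily the exact mechanism used in the paper.

```python
import torch
import torch.nn as nn

class DualEncoderFusion(nn.Module):
    """Concatenate the bottleneck features of a CNN encoder and a Transformer
    encoder, then reduce channels before handing them to the decoder."""

    def __init__(self, cnn_encoder: nn.Module, transformer_encoder: nn.Module,
                 cnn_ch: int, trans_ch: int, out_ch: int):
        super().__init__()
        self.cnn_encoder = cnn_encoder
        self.transformer_encoder = transformer_encoder
        self.fuse = nn.Conv3d(cnn_ch + trans_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f_cnn = self.cnn_encoder(x)          # (B, cnn_ch, d, h, w)
        f_tr = self.transformer_encoder(x)   # (B, trans_ch, d, h, w), same spatial size assumed
        return self.fuse(torch.cat([f_cnn, f_tr], dim=1))
```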
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot or only marginally benefit from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distilling targets, losses, input, network regularization, sequential distillation, etc., revealing that: 1) Distilling token relations is more effective than CLS token- and feature-based distillation; 2) Using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) Weak regularization is preferred; etc. With these findings, we achieve significant fine-tuning accuracy improvements over scratch MIM pre-training on ImageNet-1K classification, using the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU in ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way of developing small vision Transformer models, that is, by exploring better training methods rather than introducing inductive biases into architectures as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
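As a hedged illustration of "distilling token relations", the sketch below matches softmax-normalized pairwise token-similarity matrices between a teacher layer and a student layer with a KL loss; TinyMIM's actual relation targets (e.g., which layer is used and whether Q-K or V-V relations are matched) and loss details may differ.

```python
import torch
import torch.nn.functional as F

def token_relation_kd_loss(student_tokens: torch.Tensor,
                           teacher_tokens: torch.Tensor,
                           tau: float = 1.0) -> torch.Tensor:
    """Distill token-to-token relations: align the normalized pairwise similarity
    matrices of student and teacher token sequences of shape (B, N, C)."""
    def relation_log_probs(tokens: torch.Tensor) -> torch.Tensor:
        tokens = F.normalize(tokens, dim=-1)
        sim = tokens @ tokens.transpose(1, 2) / tau   # (B, N, N) cosine similarities
        return F.log_softmax(sim, dim=-1)

    s_rel = relation_log_probs(student_tokens)
    with torch.no_grad():
        t_rel = relation_log_probs(teacher_tokens).exp()   # teacher probabilities
    return F.kl_div(s_rel, t_rel, reduction="batchmean")
```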
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes the image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
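A rough sketch of the kind of query-based multi-modal decoding the abstract describes: image tokens and point-cloud tokens (each assumed to already carry 3D position embeddings) are concatenated into one memory that a set of object queries attends to. The dimensions, number of queries, and box parameterization are illustrative assumptions, not CMT's actual configuration.

```python
import torch
import torch.nn as nn

class CrossModalDetectorSketch(nn.Module):
    """Object queries attend jointly to image and point-cloud tokens."""

    def __init__(self, dim: int = 256, n_queries: int = 900, n_classes: int = 10):
        super().__init__()
        self.queries = nn.Embedding(n_queries, dim)
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=6)
        self.box_head = nn.Linear(dim, 10)   # e.g. (x, y, z, w, l, h, sin, cos, vx, vy)
        self.cls_head = nn.Linear(dim, n_classes)

    def forward(self, img_tokens: torch.Tensor, pts_tokens: torch.Tensor):
        # img_tokens: (B, N_img, dim); pts_tokens: (B, N_pts, dim)
        memory = torch.cat([img_tokens, pts_tokens], dim=1)
        q = self.queries.weight.unsqueeze(0).expand(memory.size(0), -1, -1)
        out = self.decoder(q, memory)
        return self.box_head(out), self.cls_head(out)
```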
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
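A minimal sketch of the NAIVEATTACK-style trigger injection described above: a fixed trigger patch is stamped onto a fraction of the raw images before distillation. The patch location, poisoning fraction, and function name are illustrative; DOORPING additionally optimizes the trigger throughout the distillation procedure, which is not shown here.

```python
import torch

def add_trigger(images: torch.Tensor, trigger: torch.Tensor,
                fraction: float = 0.1) -> torch.Tensor:
    """Stamp a small trigger patch onto the bottom-right corner of a randomly
    chosen fraction of images (shape (B, C, H, W)); trigger has shape (C, th, tw)."""
    images = images.clone()
    n = max(1, int(fraction * images.size(0)))
    idx = torch.randperm(images.size(0))[:n]
    th, tw = trigger.shape[-2:]
    images[idx, :, -th:, -tw:] = trigger
    return images
```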
Blind image quality assessment (BIQA) remains challenging due to the diversity of distortions and the variation of image content, which complicate the distortion patterns across different scales and aggravate the difficulty of the regression problem in BIQA. However, existing BIQA methods often fail to consider multi-scale distortion patterns and image content, and little research has been done on learning strategies that make the regression model perform better. In this paper, we propose a simple yet effective Progressive Multi-Task Image Quality Assessment (PMT-IQA) model, which contains a multi-scale feature extraction module (MS) and a progressive multi-task learning module (PMT), to help the model learn complex distortion patterns and better handle the regression problem, in line with the easy-to-hard progression of human learning. To verify the effectiveness of the proposed PMT-IQA model, we conduct experiments on four widely used public datasets. The experimental results indicate that the performance of PMT-IQA is superior to that of the comparison approaches, and that both the MS and PMT modules improve the model's performance.
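A hedged sketch of one way to schedule progressive multi-task training, shifting loss weight from an easier auxiliary objective to the harder quality-score regression as training proceeds; the ramp shape and the 50%-of-training cutoff are arbitrary illustrative choices, not the PMT-IQA schedule.

```python
def progressive_task_weights(epoch: int, total_epochs: int) -> tuple[float, float]:
    """Return (w_easy, w_hard): start by emphasizing the easier auxiliary task,
    then progressively shift emphasis to the harder regression task."""
    ramp = min(1.0, epoch / (0.5 * total_epochs))  # 0 -> 1 over the first half of training
    return 1.0 - ramp, ramp
```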
Automatic music generation with artificial intelligence typically requires a large amount of data, which is hard to obtain for many less common genres and musical instruments. To tackle this issue, we present ongoing work and preliminary findings on the possibility for deep models to transfer knowledge from language to music, by finetuning large language models pre-trained on a massive text corpus on only hundreds of MIDI files of drum performances. We show that by doing so, one of the largest, state-of-the-art models (GPT3) is capable of generating reasonable drum grooves, while models that are not pre-trained (Transformer) show no such ability beyond naive repetition. Evaluating generated music is a challenging task, and evaluating drum grooves, for which there is little precedent in the literature, is even more so. Hence, we propose a tailored structural evaluation method and analyze drum grooves produced by GPT3 compared to those played by human professionals, exposing the strengths and weaknesses of such generation by language-to-music transfer. Our findings suggest that language-to-music transfer learning with large language models is viable and promising.